New Technology / Military AI
Technology signals, innovation themes, and applied engineering trends. Topic: Military AI. Updated briefs and structured summaries from curated sources.
Is Big Tech buying the AI debate? NY Assemblyman Alex Bores weighs in | Equity Podcast
2026-02-27T17:21:17Z
Full timeline
0:00–5:00
The Department of Defense is urging Anthropic to permit unrestricted military use of its AI, raising safety and regulatory concerns. Public opinion is polarized, with many advocating for responsible AI deployment amidst fears of rapid technological advancement.
  • The Department of Defense is pressuring Anthropic to allow unrestricted use of its AI in military applications. This raises concerns about safety and regulation
  • Public sentiment is divided into two camps: those who believe AI will save humanity and those who fear its potential dangers. However, many people occupy a middle ground, advocating for responsible AI deployment
  • Alex Bores, a New York State Assemblymember, has become a target for Silicon Valley billionaires after sponsoring the RAISE Act, New York's first AI safety law
  • Bores argues that the leading voices in Silicon Valley represent a small minority. He believes that most Americans have concerns about AI's rapid development and its implications
  • Bores emphasizes his unique qualifications, including a master's degree in computer science and experience in the tech industry. He believes this background helps him understand the complexities of AI regulation
  • The RAISE Act is significant because it is the only bill targeted by an executive order aimed at limiting state regulation of AI. Bores highlights that he has successfully enacted this legislation despite opposition
5:00–10:00
The RAISE Act mandates major AI companies to develop and publicly commit to safety plans, enhancing accountability for safety incidents. This legislation targets companies generating over $500 million in revenue, including Google, Meta, OpenAI, and Anthropic.
  • The RAISE Act, sponsored by Alex Bores, requires major AI companies to create and publicly commit to safety plans. This ensures accountability for critical safety incidents
  • Both the RAISE Act and California's SB 53 share similar goals. However, the RAISE Act includes stronger provisions in several areas, enhancing its regulatory framework
  • Companies like Google, Meta, OpenAI, and Anthropic, which generate over $500 million in revenue, are primarily targeted by these regulations. This aims to ensure they adhere to safety standards
  • Bores argues that the pushback from Silicon Valley against the RAISE Act stems from a desire to avoid regulation. They prefer that federal standards govern AI instead of state-level initiatives
  • The opposition has shifted its messaging from honest critiques of AI regulation to more sensational attacks. This includes attempts to link Bores to controversial issues like immigration enforcement
  • Bores emphasizes that the majority of Americans support some form of AI regulation. This contradicts the narrative pushed by his opponents, who claim that regulation is unpopular
10:00–15:00
Public First Action is a political action committee advocating for AI regulation, formed in response to significant opposition funding. Anthropic has contributed $20 million to this PAC, indicating a shift towards supporting transparency and reasonable regulations in AI.
  • Public First Action is a new political action committee that supports regulating AI. It aims to provide reasonable guardrails for the technology
  • The PAC was formed in response to the significant financial backing of $125 million from Leading the Future, which opposes AI regulation
  • Anthropic has backed Public First Action with $20 million. This indicates their support for transparency and reasonable regulations in AI
  • Alex Bores had discussions with Anthropic during the development of the RAISE Act. However, they were not initial supporters of the legislation
  • Bores emphasizes the importance of support from engineers and employees at major tech companies. Many favor reasonable regulations despite opposition from executives
  • Groups like Leading the Future often criticize effective altruism, resorting to name-calling rather than engaging in honest debate about AI regulation
  • The growing number of state-level AI laws reflects the lack of a federal standard. This heightens tension between states' rights and the AI industry's push for federal preemption of state rules
15:00–20:00
Silicon Valley's political influence is underscored by significant campaign contributions aimed at opposing pro-regulation candidates, with Meta investing $65 million in super PACs. The current regulatory debate centers on whether any regulation should exist, with upcoming legislation focusing on AI model transparency regarding training data.
  • Silicon Valley's influence in politics is evident as they spend significant amounts to oppose pro-regulation candidates. For instance, Meta has allocated $65 million to super PACs that support candidates favorable to the tech industry
  • The disparity in campaign funding is stark. Leading the Future has pledged $10 million against pro-regulation candidates, which is 20 times more than what pro-regulation PACs have spent in support of those candidates
  • Conversations with founders of Leading the Future have been limited; only one founder engaged in a brief discussion. Bores maintains that legislators should keep lines of communication open even when those talks produce no policy changes
  • The current battle in AI regulation centers on whether any regulation should exist at all. Winning this initial fight is crucial before discussions can progress to the specifics of what regulations should entail
  • Upcoming legislation includes a bill requiring AI models to disclose information about their training data. This bill aims to clarify the types of data used, including copyright material and personally identifiable information
  • Deep fakes present a solvable issue if appropriate policies are implemented. Focusing on content provenance could lead to effective solutions for managing the challenges posed by deep fakes
20:00–25:00
Alex Bores is advocating for AI regulation through proposed bills that require AI models to disclose their training data. His broader national plan includes 41 sub-points aimed at responsible AI development and oversight.
  • Alex Bores is advocating for AI regulation through his proposed bills. These include requirements for AI models to disclose information about their training data, including whether they use copyrighted material or personally identifiable information
  • Bores believes that deepfakes can be effectively managed with the right policies. He is promoting the use of an open-source standard called C2PA to help address content provenance issues
  • Bores is optimistic about passing his bill related to content provenance, which was previously stalled in the Senate. The governor has included it in her budget, indicating potential support for the legislation
  • Bores has a broader national plan for AI regulation that covers various aspects of AI governance. This plan includes 41 sub-points detailing his vision for responsible AI development and oversight
  • To learn more about Bores's initiatives, individuals can visit his website, where they can find detailed information about his AI framework. He is also active on social media under the handle @AlexBores
  • Bores emphasizes the importance of transparency in AI development. He argues that without proper oversight, the rapid advancement of AI could lead to significant societal challenges
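The episode does not walk through how C2PA actually works, but the core idea behind content provenance can be sketched in a few lines: bind the content to a cryptographic hash inside a signed manifest, then verify both the signature and the hash at consumption time. The function names, the shared-secret HMAC scheme, and the key below are illustrative assumptions only; real C2PA manifests are embedded in the media file and signed with X.509 certificate chains, not a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for this toy sketch; real C2PA relies on
# certificate-based signatures, not a shared-secret HMAC.
SIGNING_KEY = b"creator-secret-key"

def make_manifest(content: bytes, creator: str) -> dict:
    """Bind content to a signed provenance claim (toy illustration)."""
    digest = hashlib.sha256(content).hexdigest()
    claim = {"creator": creator, "content_sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify(content: bytes, manifest: dict) -> bool:
    """Check the claim's signature, then that the content hash still matches."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # claim was forged or altered
    return manifest["claim"]["content_sha256"] == hashlib.sha256(content).hexdigest()

original = b"authentic video frame data"
m = make_manifest(original, "newsroom-camera-01")
print(verify(original, m))                 # True: content matches its claim
print(verify(b"deepfaked frame data", m))  # False: hash no longer matches
```

This is why Bores frames deepfakes as a provenance problem: a standard like C2PA does not detect fakes, it lets authentic content carry verifiable evidence of its origin, so unsigned or tampered media stands out.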